# English text processing

## Liberalmind V1.5 I1 GGUF

A quantized version of LiberalMind_v1.5, offering files in various quantization types and sizes for efficient use under different resource constraints.

- License: Apache-2.0
- Tags: Large Language Model · Transformers · English
- Author: mradermacher · Downloads: 513 · Likes: 1

## Text To Cypher Gemma 3 27B Instruct 2025.04.0

A version of Google's Gemma 3 27B Instruct large language model fine-tuned on the neo4j/text2cypher-2025v1 dataset, specializing in converting natural language into the Cypher query language.

- Tags: Large Language Model · Safetensors · English
- Author: neo4j · Downloads: 178 · Likes: 5

## Modernbert Base Emotions

A multi-category emotion classification model fine-tuned from ModernBERT-base, recognizing 7 emotion labels.

- Tags: Text Classification · Transformers · English
- Author: cirimus · Downloads: 33 · Likes: 2

## T5 Small Finetuned Samsum

A dialogue summarization model fine-tuned from T5 Small on the Samsung/samsum dataset, generating concise and accurate dialogue summaries.

- Tags: Text Generation · TensorBoard
- Author: user10383 · Downloads: 11 · Likes: 0

## Text Summarize BartBaseCNN Finetune

A BART-based text summarization model suited to abstract generation for English scientific papers.

- License: MIT
- Tags: Text Generation · Transformers · English
- Author: GilbertKrantz · Downloads: 0 · Likes: 0

## Deberta Zero Shot Classification

A zero-shot text classification model fine-tuned from DeBERTa-v3-base, suited to scenarios with scarce labeled data or to rapid prototyping.

- License: MIT
- Tags: Text Classification · Transformers · English
- Author: syedkhalid076 · Downloads: 51 · Likes: 0

## Stella En 400M V5 Cpu

A model that performs strongly across multiple natural language processing tasks, particularly classification, retrieval, clustering, and semantic textual similarity.

- License: MIT
- Tags: Text Embedding
- Author: biggunnyso4 · Downloads: 612 · Likes: 1

## Deberta V3 Small Base Emotions Classifier

Fast Emotion-X is an emotion detection model fine-tuned from Microsoft's DeBERTa V3 Small, accurately classifying text into six emotion categories.

- License: MIT
- Tags: Text Classification · Transformers · English
- Author: AnkitAI · Downloads: 518 · Likes: 2

## T5 Base Summarization Claim Extractor

A T5-based model specialized in extracting atomic claims from summaries, serving as a key component in summary factuality assessment pipelines.

- Tags: Text Generation · Transformers · English
- Author: Babelscape · Downloads: 666.36k · Likes: 9

## Roberta Large Zeroshot V2.0 C

A RoBERTa-large model designed for efficient zero-shot classification, trained on commercially friendly data, able to perform text classification without task-specific training data.

- License: MIT
- Tags: Text Classification · Transformers · English
- Author: MoritzLaurer · Downloads: 53 · Likes: 2

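Zero-shot classifiers like this are normally driven through the `transformers` zero-shot-classification pipeline, which scores arbitrary candidate labels against an input text. A minimal sketch follows; the Hugging Face model ID is an assumption inferred from the listed author and model name, and the pipeline call is left commented out since it downloads weights.

```python
# Hypothetical usage of a zero-shot classifier; the model ID below is an
# assumption based on the listed author ("MoritzLaurer") and model name.
#
# from transformers import pipeline
# classifier = pipeline("zero-shot-classification",
#                       model="MoritzLaurer/roberta-large-zeroshot-v2.0-c")
# result = classifier("The battery drains within two hours.",
#                     candidate_labels=["hardware issue", "billing", "praise"])

def top_label(result: dict) -> str:
    """Pick the highest-scoring label from a zero-shot pipeline result,
    a dict with parallel 'labels' and 'scores' lists."""
    best = max(range(len(result["scores"])), key=result["scores"].__getitem__)
    return result["labels"][best]

# Example with a mocked pipeline result:
mocked = {"labels": ["hardware issue", "billing", "praise"],
          "scores": [0.91, 0.06, 0.03]}
print(top_label(mocked))  # hardware issue
```

The pipeline already returns labels sorted by score, but selecting via `max` keeps the helper safe if the result is reordered upstream.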
## Tinyllama 15M

A 15-million-parameter Llama 2 architecture model trained on the TinyStories dataset.

- License: MIT
- Tags: Large Language Model · Transformers
- Author: nickypro · Downloads: 3,217 · Likes: 11

## Bert Mini

One of the small BERT models released by Google Research: English-only, case-insensitive, and trained with WordPiece masking.

- License: MIT
- Tags: Large Language Model · Transformers · English
- Author: lyeonii · Downloads: 263 · Likes: 1

## E5 Small

E5-small is a compact sentence-transformer model focused on sentence similarity and text embedding, performing well across classification and retrieval tasks.

- License: MIT
- Tags: Text Embedding · English
- Author: intfloat · Downloads: 16.96k · Likes: 41

## Legacy1

BERT (Bidirectional Encoder Representations from Transformers) is a pre-trained language model based on the Transformer architecture, developed by Google.

- Tags: Large Language Model · Transformers
- Author: khoon485 · Downloads: 19 · Likes: 0

## Ner Distilbert Base Uncased Ontonotesv5 Englishv4

A named entity recognition model based on distilbert-base-uncased, fine-tuned on the conll2012_ontonotesv5-english-v4 dataset.

- Tags: Sequence Labeling · Transformers
- Author: djagatiya · Downloads: 18 · Likes: 1

## Bert Tiny Uncased

A tiny, case-insensitive version of BERT, suited to natural language processing in resource-constrained environments.

- License: Apache-2.0
- Tags: Large Language Model · Transformers · English
- Author: gaunernst · Downloads: 3,297 · Likes: 4

## Test Ner

A BERT-based fine-tuned named entity recognition model achieving an F1 score of 95.91% on its evaluation set.

- License: Apache-2.0
- Tags: Sequence Labeling · Transformers
- Author: kSaluja · Downloads: 15 · Likes: 0

## Bert Base Uncased Ganesh123

A fine-tuned version of the BERT base model; the specific use case and training data are currently undocumented.

- Tags: Large Language Model · Transformers
- Author: stevems1 · Downloads: 173 · Likes: 0

## All MiniLM L6 V2

A lightweight sentence-embedding model based on the MiniLM architecture that maps text into a 384-dimensional vector space, suited to semantic search and clustering.

- License: Apache-2.0
- Tags: Text Embedding · English
- Author: optimum · Downloads: 171.02k · Likes: 18

## Data2vec Text Base

A general self-supervised learning framework pre-trained on English text with the data2vec objective, which handles tasks across different modalities through a unified approach.

- License: MIT
- Tags: Large Language Model · Transformers · English
- Author: facebook · Downloads: 1,796 · Likes: 12

## Ner English Ontonotes Fast

Flair's built-in fast model for 18-class English named entity recognition, trained on the Ontonotes dataset.

- Tags: Sequence Labeling · English
- Author: flair · Downloads: 23.94k · Likes: 21

## English Pert Base

PERT is a BERT-based pre-trained language model designed for English text processing tasks.

- Tags: Large Language Model · Transformers · English
- Author: hfl · Downloads: 37 · Likes: 6

## Pos English

Flair's built-in standard English POS tagging model, trained on the Ontonotes dataset with an F1 score of 98.19.

- Tags: Sequence Labeling · English
- Author: flair · Downloads: 24.83k · Likes: 30

## Bart Base Squad Qg No Answer

A question generation model based on the BART-base architecture, fine-tuned on the SQuAD dataset, that generates questions without requiring answer information.

- Tags: Question Answering System · Transformers · English
- Author: research-backup · Downloads: 15 · Likes: 0

## Chunk English

Flair's built-in standard English phrase chunking model for identifying grammatical structures such as noun phrases and verb phrases in sentences.

- Tags: Sequence Labeling · English
- Author: flair · Downloads: 1,186 · Likes: 17

## Distilbert Base Uncased Finetuned Mi

A fine-tuned version of distilbert-base-uncased on an unspecified dataset, primarily used for text-related tasks.

- License: Apache-2.0
- Tags: Large Language Model · Transformers
- Author: yancong · Downloads: 26 · Likes: 1

## Gpt Neo 2.7B 8bit

A modified version of EleutherAI's GPT-Neo (2.7B parameters) that supports text generation and fine-tuning on Colab or an equivalent desktop GPU.

- License: MIT
- Tags: Large Language Model · Transformers · English
- Author: gustavecortal · Downloads: 99 · Likes: 11

## Distilroberta Base

DistilRoBERTa is a lightweight distilled version of the RoBERTa model, retaining most of its performance while being smaller and faster.

- License: Apache-2.0
- Tags: Large Language Model · Transformers · English
- Author: typeform · Downloads: 37 · Likes: 0

## Bert Base Uncased Amazon Polarity

A text classification model fine-tuned from BERT base on the Amazon product-review sentiment analysis dataset, achieving 94.65% accuracy.

- License: Apache-2.0
- Tags: Text Classification · Transformers
- Author: fabriceyhc · Downloads: 339 · Likes: 4

## Autonlp Cola Gram 208681

A binary classification model trained on the AutoNLP platform for judging grammatical correctness.

- Tags: Text Classification · Transformers · English
- Author: kamivao · Downloads: 16 · Likes: 0

## Distilbert Base Cased Finetuned Conll03 English

A named entity recognition model based on DistilBERT, fine-tuned on the CoNLL-2003 English dataset, suited to case-sensitive text processing.

- License: Apache-2.0
- Tags: Sequence Labeling · Transformers · English
- Author: elastic · Downloads: 7,431 · Likes: 14

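NER models like this one emit a BIO tag per token (e.g. `B-PER`, `I-PER`, `O`), which downstream code must merge into entity spans. A minimal sketch of that grouping step; the pipeline call is commented out and the model ID is an assumption based on the listed author and name.

```python
# Hypothetical usage; the model ID is an assumption.
#
# from transformers import pipeline
# ner = pipeline("token-classification",
#                model="elastic/distilbert-base-cased-finetuned-conll03-english")

def group_bio(tokens, tags):
    """Merge BIO-tagged tokens into (entity_text, entity_type) spans."""
    spans, current, ctype = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:                      # close the previous entity
                spans.append((" ".join(current), ctype))
            current, ctype = [tok], tag[2:]
        elif tag.startswith("I-") and current and tag[2:] == ctype:
            current.append(tok)              # continue the current entity
        else:
            if current:                      # "O" or a mismatched "I-" tag
                spans.append((" ".join(current), ctype))
            current, ctype = [], None
    if current:                              # flush a trailing entity
        spans.append((" ".join(current), ctype))
    return spans

print(group_bio(["Ada", "Lovelace", "lived", "in", "London"],
                ["B-PER", "I-PER", "O", "O", "B-LOC"]))
# [('Ada Lovelace', 'PER'), ('London', 'LOC')]
```

The transformers pipeline can do this merging itself via its aggregation options, but the explicit version shows what the per-token tags mean.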
## Declutr Base

DeCLUTR-base is a universal sentence encoder trained through deep contrastive learning, used to generate high-quality text representations.

- License: Apache-2.0
- Tags: Text Embedding · English
- Author: johngiorgi · Downloads: 99 · Likes: 7

## Autonlp Imdb Test 21134442

A binary classification model trained via AutoNLP for IMDB review sentiment analysis.

- Tags: Text Classification · Transformers · English
- Author: mmcquade11 · Downloads: 16 · Likes: 0

## BERT Tweet Sentiment 100k 2eps

A sentiment analysis model fine-tuned from bert-base-uncased for tweet sentiment classification.

- License: Apache-2.0
- Tags: Text Classification · Transformers
- Author: joe5campbell · Downloads: 14 · Likes: 0

## Upos English

Flair's built-in standard English universal POS tagging model, trained on the Ontonotes dataset with an F1 score of 98.6.

- Tags: Sequence Labeling · English
- Author: flair · Downloads: 43 · Likes: 4

## Typo Detector Distilbert En

A spelling error detection model based on the DistilBERT architecture, used to identify typos in text.

- Tags: Sequence Labeling · Transformers · English
- Author: m3hrdadfi · Downloads: 25.05k · Likes: 10

## Bert Small2bert Small Finetuned Cnn Daily Mail Summarization

An encoder-decoder model based on the BERT-small architecture, fine-tuned for summarization on the CNN/Dailymail dataset.

- License: Apache-2.0
- Tags: Text Generation · Transformers · English
- Author: mrm8488 · Downloads: 92 · Likes: 10

## Quora Distilroberta Base

A cross-encoder trained on the Quora duplicate-questions dataset that predicts the probability that two questions are duplicates.

- License: Apache-2.0
- Tags: Text Classification · English
- Author: cross-encoder · Downloads: 28.01k · Likes: 1

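Unlike bi-encoders, a cross-encoder scores a sentence pair jointly in one forward pass; for duplicate detection the single output is mapped to a probability with a sigmoid. A minimal sketch, with the model call commented out; the model ID is an assumption from the listed author and name, and depending on the library version `predict` may already apply the sigmoid itself.

```python
# Hypothetical usage; the model ID is an assumption.
#
# from sentence_transformers import CrossEncoder
# model = CrossEncoder("cross-encoder/quora-distilroberta-base")
# score = model.predict([("How do I learn Python?",
#                         "What is the best way to learn Python?")])[0]

import math

def duplicate_probability(logit: float) -> float:
    """Map a raw classifier logit to a probability via the sigmoid."""
    return 1.0 / (1.0 + math.exp(-logit))

print(duplicate_probability(0.0))  # 0.5 (logit 0 -> undecided)
```

In a deduplication workflow the probability is compared against a threshold (say 0.9) chosen on validation data to trade precision against recall.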
## Distilbert Bert Summarization Cnn Dailymail

A text summarization model fine-tuned on the cnn_dailymail dataset using the DistilBERT architecture.

- Tags: Text Generation · Transformers
- Author: Ayham · Downloads: 19 · Likes: 0